You're listening to Lifelong Learning on ReachMD. The following program was recorded at the 2018 annual meeting of the Alliance for Continuing Education in the Health Professions. Here is your host, Alicia Sutton.
Alicia Sutton: We are broadcasting from the annual meeting of the Alliance for Continuing Education in the Health Professions in Orlando, Florida. I am excited to have an interview, actually a two-part interview, starting with my two guests. We're going to be talking about predicting the potential for a CE activity to lead to behavior change, and it's that predicting part that we're going to get to. Please introduce yourselves.
Hilary Schmidt: I'm Hilary Schmidt. I am with the Calibre Institute for Quality Medical Education.
Alicia Sutton: Thank you for joining us.
Hilary Schmidt: Thank you for having me.
John Ruggiero: I am John Ruggiero. I am with the medical affairs organization at Genentech, a supporter within this industry, and I am happy to be here as well.
Alicia Sutton: Terrific. Thank you, guys. So, we'll put the lay of the land out there: there are somewhere around 150,000 certified activities in any given year that clinicians can choose from, which raises the question of how we know which ones might be effective, because actually not that many are effective in changing behavior. Can you address that for us a little bit?
Hilary Schmidt: Sure. Lots of meta-analyses and research show that probably less than 20% of the educational activities out there actually drive behavior change, and probably less than 5% actually lead to improvements in patient outcomes. The question is how health care providers, the consumers of that education, actually find education that meets their needs and has the greatest predictive value for improving patient care. Another challenge is when we actually find out whether or not an educational activity is driving improvements in performance and in patient care.
Alicia Sutton: And that's sometimes because it's many months down the road that you get feedback from the outcome studies.
Hilary Schmidt: Way down the road. Frequently, the activity is no longer even available. That is a huge challenge. Wouldn't it be fantastic if we had a prospective way to identify the education that has the highest potential to lead to behavior change? That was the challenge we set out to solve.
Alicia Sutton: That's interesting. And John, from your perspective, obviously, in providing these educational grants to effect change in health care, what's the thinking in your organization?
John Ruggiero: What's so important to recognize here is that I was so engaged and excited when Hilary asked us to be a supporter of such an initiative. If we truly believe as educators that we are adult learning experts, or believe in the principles of adult learning, in cognition, in behavior change theories and the practices that help implement those changes, we have to be willing to suggest that there is strong science behind predictability. For me as a supporter to continue being a good steward of the finances that I provide for all these great educational activities in this industry, I have to be able to predict what is going to be most effective, so I can generate the win and the success story for my leadership and within my organization.
Alicia Sutton: Thank you for that. What are the underlying principles behind sort of the solution that you came up with?
Hilary Schmidt: We know a lot about the variables that drive behavior change, based on research from the cognitive, behavioral, and social sciences that goes back decades. So we thought, why not leverage that: identify those variables that have been proven to drive rapid learning, to drive retention of information, to drive recall of information at the point of care, and, more importantly, the ability to use that information fluidly in caring for patients. We know a lot about that research, so why not look at it, identify variables that are related to instructional design, and measure activities based on the extent to which they include instructional design elements that have been proven to drive behavior change.
Alicia Sutton: That's interesting. Are these elements applicable across any format in education?
Hilary Schmidt: That's a great question, and the answer is that learning is learning, regardless of whether you're reading information, in a live activity, or participating in a webcast or an online activity. The same principles that determine whether you understand information rapidly, remember it, and can use it fluidly apply across the full range. However, in the study that we're doing right now, we're focusing on online activities, because they remain available for two years.
Alicia Sutton: Right. In the second part of our interview we will talk about data outcomes and what happened with this. Where did you conduct this study? Where were the users from, and what were you looking at? Tell us a little bit about that target audience.
John Ruggiero: One of the first ways to respond to that is that Hilary and her organization were successful in gathering support from eight different supporters, which is a tremendous feat, by the way. The fact that there were so many supporters behind wanting to do something more effective, something that allows them to drive home the appropriate message and to support the appropriate types of education, is just really tremendous. So the eight supporters selected activities from many different providers, which came to 141 different activities in all. Now, you'll find out in part two that 141 went down to 77, because not all the activities met the criteria that made this scientifically rigorous. It was an amazing process to be a part of and to have constant conversation around what we were supplying and what we could help address for this specific initiative.
Alicia Sutton: I like that. Where do you think this is going, looking out five years from now, from both of your perspectives: from the educational design side, and I use that term quite broadly because I know you have many years of expertise in it, and from the funder side. So from both of your perspectives, five years down the road, imagining this tool being used more, what do you think is going to happen with education?
Hilary Schmidt: I'd say that if this tool does turn out, as we strongly believe it will, to have the predictive value that it's designed to provide, we will have stakeholders in the CE world who are able to find, quickly and easily, the education that is the most engaging, the most effective, and has the greatest impact on patient outcomes. They will be able to do that rapidly, without guessing, and within a very short time after an activity becomes available to learners.
Alicia Sutton: Are you hoping that perhaps there will be a demarcation, a scale, put on it?
Hilary Schmidt: We love the idea of a scale, in particular one recognizing education that meets the highest standards. Maybe it will be the top 10% of all the education that we see, a kind of Good Housekeeping seal of approval or Michelin-star designation. We haven't determined exactly what those criteria will be yet, but the goal is to recognize the best-designed education so that people can learn what it looks like, know it has predictive value, and find it easily.
John Ruggiero: As a former provider, I would see this as an incredible tool to help me assess whether or not I was meeting specific criteria against the baseline. I would look at it not as a rejection of my great idea, but as a way I could tailor my education to make it more effective and more transformational. As a supporter, this is all about making sure that the education you support is going toward a transformational goal. So, it's not just about an ____ acquisition. How can we help address the current pressures within the health care marketplace to show that education is truly an agent of change? So, to answer your question very simply, although that was long-winded, and I apologize for that, it's really focused on the following: we need predictability; in times of disruption, it is key. We need a more standardized outcomes process, and we need to be open to the fact that current outcomes models need to evolve to meet current demands.
Alicia Sutton: That's excellent, John. Your background, as both an educational provider and a funder, makes that especially valuable, and frankly, the same is true of the backgrounds you both bring.
Thank you so much for joining us. Part one is excellent. You've laid it out there for your colleagues, who are going to come in and talk about the results and what the future looks like to them.
Hilary Schmidt: Thanks so much for having us.
Alicia Sutton: Again, we are broadcasting from the Alliance for Continuing Education in the Health Professions at the annual meeting. Thank you again.
John Ruggiero: Thank you.
Hilary Schmidt: Thank you.
PART TWO
Alicia Sutton: Welcome back. This is part two of our interview. We are at the annual meeting of the Alliance for Continuing Education in the Health Professions. In part one, we talked with your colleagues about predicting the potential for a CE activity to actually lead to behavior change, and they set it up for us to understand what the problem statement is. You are going to help us understand the results of what you've done. I would like you to introduce yourselves to our audience.
Nili Solomonov: I am Nili Solomonov. I am a researcher based at Adelphi University and at the Calibre Institute for Quality Medical Education.
Alicia Sutton: Thank you for joining us.
Greg Salinas: I am Greg Salinas. I am the President of CE Outcomes.
Alicia Sutton: Thank you for joining us. So we understand, obviously, that there is a way to measure that quality, and you went through a process to develop it. Can you talk through that process?
Nili Solomonov: Absolutely. As Hilary already mentioned, we identified the instructional design features that are most likely to predict behavioral change, based on meta-analyses and a literature review, and then we operationalized those instructional design features into scale items that can be coded by trained non-expert coders. Our scale has two dimensions. The first is frequency: to what extent is an instructional design feature used in an activity; very frequently, or only once or twice? The second is quality: how well is it being used? A feature could be used once but very well, or many times but very poorly. Then we recruited four Ph.D.-level, non-expert coders, because we wanted to produce scores that would be as objective as possible and not based on expertise and prior knowledge. We trained those coders and conducted periodic reliability analyses to make sure that they were coding the activities reliably. Once the coders were coding reliably with the materials we provided, they coded the activities provided to us by the supporters.
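To make the reliability step concrete: below is a minimal sketch of how a periodic inter-rater check like the one described could be computed, assuming ordinal codes on a 0-3 range and average pairwise weighted kappa as the agreement statistic. The coder labels and ratings are hypothetical illustrations, not the study's data or its actual statistic.

```python
# Sketch: inter-rater reliability for four coders' ordinal codes.
# Ratings are invented; the real study's codes and statistic may differ.
from itertools import combinations

import numpy as np
from sklearn.metrics import cohen_kappa_score

# rows = coders, columns = coded items (hypothetical 0-3 frequency/quality codes)
ratings = np.array([
    [2, 3, 1, 0, 2, 3, 1, 2],  # coder A
    [2, 3, 1, 1, 2, 3, 1, 2],  # coder B
    [2, 2, 1, 0, 2, 3, 0, 2],  # coder C
    [2, 3, 1, 0, 1, 3, 1, 2],  # coder D
])

# Average pairwise weighted kappa; quadratic weights penalize
# large disagreements more heavily than near-misses.
kappas = [
    cohen_kappa_score(ratings[i], ratings[j], weights="quadratic")
    for i, j in combinations(range(len(ratings)), 2)
]
print(f"mean pairwise weighted kappa: {np.mean(kappas):.2f}")
```

Run periodically during coding, a check like this flags drift early, so coders can be retrained before unreliable codes accumulate.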
Alicia Sutton: Excellent. I understand there were 77 activities in the end that were selected. Any big takeaways for you from an instructional design standpoint from that scale?
Nili Solomonov: I think even now, with the sample we have, we can learn so much about the state of continuing medical education. We can see that some instructional design features are used well and quite frequently by many activities, and some are not being used at all, or are being used by only a small number of activities. Demonstration is one example; it's an instructional design feature that we know is really important for driving behavioral change. We are able not only to characterize a specific activity and compare between different activities, but also to establish a baseline, or average, across activities and look at how far a given activity is from that average. We can also look at profiles of the top 5% and bottom 5% of activities and identify which features characterize a higher-level activity.
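A sketch of the benchmarking idea just described, assuming a simple composite of per-feature scores; the feature names, score range, and data are invented for illustration, not the study's actual scale.

```python
# Sketch: baseline and top/bottom 5% profiles across activities.
# Feature names and 0-5 scores are hypothetical.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
features = ["demonstration", "practice", "feedback", "case_examples"]
# hypothetical feature scores for 77 activities
df = pd.DataFrame(rng.integers(0, 6, size=(77, len(features))), columns=features)

# composite score per activity and distance from the cross-activity baseline
df["composite"] = df[features].mean(axis=1)
baseline = df["composite"].mean()
df["delta_from_baseline"] = df["composite"] - baseline

# profile the highest- and lowest-scoring 5% of activities
top = df[df["composite"] >= df["composite"].quantile(0.95)]
bottom = df[df["composite"] <= df["composite"].quantile(0.05)]
print("top-5% feature profile:\n", top[features].mean())
print("bottom-5% feature profile:\n", bottom[features].mean())
```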
Alicia Sutton: Greg, you obviously worked on the outcome side.
Greg Salinas: Sure. A main component of this is that we have this scale and we want to make sure that it is valid. A question from the beginning was: do the best programs, as rated by this scale, lead to the best outcomes? For that, all the outcomes were gathered from these programs; all of the reports that were provided to the supporters were given to us to make some sense out of and to bring some kind of standardization to. That was some of the trouble here, because, as anyone who has ever done this before knows, every provider's reports are different; sometimes even within a single provider, different reports are completely different. There are different measurements, different scales, different rubrics, different samples, so how do you make one measure out of all of them and make it consistent? That was the struggle we had, and it's a struggle that everyone who is trying to aggregate reports faces.
Alicia Sutton: Interesting. So how did you go about this?
Greg Salinas: It was a process. At first, we were going to do a typical effect-size analysis: look at the general means and the standard deviations. As we got deeper, we found that not everyone provided standard deviations, and not everyone provided the appropriate scores, so we had to go back a step. We initially wanted all level 5 outcomes; out of the 70-something activities, only five had level 5 outcomes. We had to go down to level 4, then level 3, so really we were making assumptions on top of assumptions, trying to find what every one of these reports agreed on. What we ended up with was a comparison between the percentage scores of the control versus the assessment, whether that was pre/post or post-versus-control, getting them on the same page and standardizing just that score so we at least had something we could work with.
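A minimal sketch of that kind of standardization, assuming each report can be reduced to a single percentage-point difference between the assessment score and its comparison, whether that comparison is a pre-test or a control group. The report structure and values are hypothetical, not the study's actual data model.

```python
# Sketch: reduce heterogeneous outcome reports to one comparable number.
# All names and values here are illustrative.
from dataclasses import dataclass

@dataclass
class OutcomeReport:
    activity_id: str
    comparison_pct: float  # pre-test or control group, % correct
    assessment_pct: float  # post-test or participant group, % correct

def standardized_gain(report: OutcomeReport) -> float:
    """Percentage-point gain; crude, but computable for every report."""
    return report.assessment_pct - report.comparison_pct

reports = [
    OutcomeReport("act-01", comparison_pct=52.0, assessment_pct=68.0),  # pre/post
    OutcomeReport("act-02", comparison_pct=60.0, assessment_pct=63.0),  # post vs control
]
for r in reports:
    print(r.activity_id, f"{standardized_gain(r):+.1f} points")
```

The design trade-off is the one Greg describes: without standard deviations, a proper effect size isn't computable, so a raw percentage-point difference becomes the lowest common denominator that every report can supply.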
Alicia Sutton: That's excellent. How far away do you think we are from having a tool that can be widely used by educators and funders, so they can look at something and understand it as well?
Greg Salinas: I don't think we're far. I think everyone has the capability of doing it; we all just have to agree on what it is. That's going to be the struggle. If you're putting on education, there are some things you want to highlight and some things you might not want to highlight, and that will depend on the program. I think it's about getting everybody on the same page about some standards, the standard things that we want to measure; then you can do other things if you want to. As long as you have those standards, I think that's where we need to go: some sort of standard.
Alicia Sutton: We talked in part one about whether these criteria, or the scale, can be applied across any format, right? But there must be a differentiator; you had mentioned demonstration being an excellent educational tool. Yet in an enduring piece that is flat print, versus a video where you really can show a demonstration, do the criteria on your scorecard shift across those formats?
Nili Solomonov: Our items are not content dependent or format dependent. We've seen examples of this: we can have a written activity that provides excellent demonstrations, using images and brief texts that depict an interaction between a patient and a physician. We'll need to give you an empirical, quantitative answer to that eventually, but our experience watching all these activities is that we can really see the features present in all formats.
Alicia Sutton: That's very good. Where do you see education changing as this potential model gets accepted? Where do you hope that it will go?
Nili Solomonov: It's been such a pleasure to be at this conference. This is a collaborative study, and that is what has been so amazing and fun about it, because we're collaborating with supporters, with learning and instructional design experts, and with CE outcomes experts, so we're getting all of these different perspectives on how this scale could be used. People are reaching out to us and saying, as a provider, this could help me create better education, or this could help me understand what I'm providing to my customers. And supporters; you've already heard from John. I think this scale's strength is that it is relevant to everyone involved in education. I would love to see it accepted by the community, as a service to the community.
Alicia Sutton: I think every stakeholder, like you said, is impacted.
Greg Salinas: I think it's important that everyone understand what's happening here: we're not trying to change the way people do their education; we're just trying to see what works and what doesn't. That has been the question since the beginning. What works, what doesn't, and how do we make those changes? How do we improve what we're doing, which will improve not only clinician practice but also lives? I think the ultimate goal here is: what can we do to really make a difference?
Alicia Sutton: Excellent. Well, that's great insight, from what your colleagues did in part one to what you've shed light on here with the tool. Thank you. We really appreciate it.
Nili Solomonov: Thank you.
Greg Salinas: Thank you.
Alicia Sutton: We're broadcasting again from the Alliance for Continuing Education in the Health Professions annual meeting. I really appreciate your time.
You've been listening to Lifelong Learning on ReachMD, featuring key insights from the Alliance’s 2018 annual meeting. To download this podcast and others in this series, please visit reachmd.com/lifelonglearning.